Search results for: "Life Institute"


25 mentions found


Richard Branson and other public figures have signed an open letter warning of AI risks. The letter, issued by The Elders and the Future of Life Institute, urges world leaders to take action. It highlights risks including the climate crisis, pandemics, nuclear weapons, and AI. Richard Branson and the grandson of J. Robert Oppenheimer are among the signatories of an open letter warning of the risks of uncontrolled AI.
Persons: Richard Branson, J. Robert Oppenheimer Organizations: The Elders, Life Institute
Richard Branson believes the environmental costs of space travel will "come down even further." Dozens of high-profile figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis. Virgin Group founder Richard Branson, along with former United Nations Secretary-General Ban Ki-moon and Charles Oppenheimer, the grandson of American physicist J. Robert Oppenheimer, signed an open letter urging action against the escalating dangers of the climate crisis, pandemics, nuclear weapons, and ungoverned AI. Signatories called for urgent multilateral action, including financing the transition away from fossil fuels, signing an equitable pandemic treaty, restarting nuclear arms talks, and building the global governance needed to make AI a force for good. The letter was released on Thursday by The Elders, a nongovernmental organization launched by former South African President Nelson Mandela and Branson to address global human rights issues and advocate for world peace.
Persons: Richard Branson, Ban Ki-moon, Charles Oppenheimer, J. Robert Oppenheimer, Nelson Mandela, Max Tegmark, Jaan Tallinn Organizations: Virgin Group, United Nations, The Elders, Life Institute, MIT, Skype
Foundation models like the one built by Microsoft (MSFT.O)-backed OpenAI are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks. At a meeting of the three countries' economy ministers on Oct. 30 in Rome, France persuaded Italy and Germany to support a proposal, sources told Reuters. Until then, negotiations had gone smoothly, with lawmakers making compromises across several other conflict areas such as regulating high-risk AI, sources said. France-based AI company Mistral and Germany's Aleph Alpha have criticised the tiered approach to regulating foundation models, winning support from their respective countries. Other pending issues in the talks include the definition of AI, fundamental rights impact assessments, and law enforcement and national security exceptions, sources told Reuters.
Persons: Carlos Barria, Thierry Breton, Geoffrey Hinton, Mark Brakel, Supantha Mukherjee, Josephine Mason, Alexander Smith Organizations: REUTERS, Reuters, Microsoft, OpenAI, European Commission, Mistral, Aleph Alpha, Life Institute, Thomson Reuters Locations: San Francisco, California, U.S., Stockholm, Brussels, London, France, Germany, Italy, Rome, Spain, Belgium
Generative AI still mostly experimental, say executives
2023-11-09 | by Katie Paul | www.reuters.com | time to read: +4 min
NEW YORK, Nov 9 (Reuters) - One year after the debut of ChatGPT created a global sensation, leaders of business, government and civil society said at the Reuters NEXT conference in New York that generative AI technology is still mostly in an experimental stage, with limited exceptions. Aguirre cited self-driving cars as an example of a technology struggling to make the transition to full deployment. “I’ve observed many generative AI applications that are in production while other customers are just beginning their journey.” One way generative AI was already being deployed widely, highlighted by speakers across industries, was to write computer code. Gary Marcus, a professor at New York University, said generative AI was error-prone in coding just as in other areas, but that the problem was less of a hindrance in the tech sector because programmers knew how to troubleshoot it. Companies should move slowly and deliberately when integrating the technology into uses where accuracy matters, executives emphasized.
Persons: Anthony Aguirre, Sherry Marcus, Lili Cheng, Gary Marcus, Vijoy Pandey, Katie Paul, David Gregorio Organizations: Reuters NEXT, Life Institute, Microsoft, Cisco, Reuters, New York University, Thomson Reuters Locations: New York
The words "Artificial Intelligence" are seen in an illustration taken March 31, 2023. Companies are increasingly using AI to make decisions, including about pricing, which could lead to discriminatory outcomes, experts warned at the conference. “We should not underestimate how powerful these models are now and how rapidly they are going to get more powerful,” he said. Developing ever-more powerful AI will also risk eliminating jobs to a point where it may be impossible for humans to simply learn new skills and enter other industries. “Once that happens, I fear that it's not going to be so easy to go back to AI being a tool and AI as something that empowers people.”
Persons: Dado Ruvic, Gary Marcus, Marta Tellado, Anthony Aguirre, Anna Tong, Lisa Shumaker Organizations: REUTERS, Reuters NEXT, New York University, Consumer Reports, Life Institute, Reuters, Thomson Reuters Locations: New York, San Francisco
LONDON, Oct 31 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit this week to examine the risks of the fast-growing technology and kickstart an international dialogue on its regulation. The aim of the summit is to start a global conversation on the future regulation of AI. Currently there are no broad-based global regulations focused on AI safety, although some governments have started drawing up their own rules. A recent Financial Times report said Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC). When Sunak announced the summit in June, some questioned how well-equipped Britain was to lead a global initiative on AI regulation.
Persons: Olaf Scholz, Justin Trudeau, Kamala Harris, Ursula von der Leyen, Wu Zhaohui, Antonio Guterres, Demis Hassabis, Sam Altman, Elon Musk, Stuart Russell, Geoffrey Hinton, Rishi Sunak, Joe Biden, Martin Coulter, Josephine Mason, Christina Fincher Organizations: WHO, United Nations, European Union, Google, Microsoft, Alibaba, Alan Turing Institute, Life Institute, EU, UN, Thomson Reuters Locations: Britain, England, Bletchley Park, Beijing, United States, China, U.S.
Technologists and advocates are again set to visit Capitol Hill on Tuesday to discuss with Senate leaders the perils and promises of artificial intelligence. Venture capitalists Marc Andreessen, co-founder and general partner of Andreessen Horowitz, and John Doerr, chair of Kleiner Perkins, will be among the 21 attendees at the second AI Insight Forum hosted by Senate Majority Leader Chuck Schumer, D-N.Y., according to a spokesperson for his office. The session is a continuation of the majority leader's effort to get the chamber up to speed on AI and determine how best to approach AI regulation. For example, Future of Life Institute President Max Tegmark is also set to attend. Other tech leaders, such as Micron Executive Vice President Manish Bhatia, Revolution CEO Steve Case, Stripe CEO Patrick Collison, and Cohere CEO Aidan Gomez, will be in attendance.
Persons: Marc Andreessen, John Doerr, Chuck Schumer, Max Tegmark, Elon Musk, Manish Bhatia, Steve Case, Patrick Collison, Aidan Gomez, Derrick Johnson, Amanda Ballantyne, Satya Nadella, Bill Gates, Mark Zuckerberg, Sundar Pichai, Sam Altman Organizations: Capitol, Senate, Andreessen Horowitz, Kleiner Perkins, Life Institute, Tesla, Space X, Micron, Revolution, Stripe, Cohere, NAACP, AFL-CIO, Microsoft, Google, CNBC, YouTube Locations: China, India
DeepMind cofounder Mustafa Suleyman recently talked about setting boundaries on AI with the MIT Technology Review. "You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable." And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event.
Persons: Mustafa Suleyman, Sam Altman, Elon Musk, Mark Zuckerberg, Demis Hassabis, Satya Nadella, Geoffrey Hinton, Yoshua Bengio Organizations: MIT Technology Review, Inflection AI, Life Institute Locations: Wall Street, Silicon Valley, Washington
Palmer Luckey told Breaking Defense the ChatGPT hype is making politicians interested in AI weapons. While Luckey may be best known as the founder of Oculus, in 2017 he created a defense tech startup called Anduril Industries. In a recent interview with Breaking Defense, Luckey said "ChatGPT has probably been more helpful to Anduril with customers and politicians than any technology in the last 10 years." Luckey, who referred to Anduril as an "AI company," clarified to Breaking Defense that ChatGPT wasn't actually powering Anduril's products. It builds military technology including drones, surveillance towers, and underwater vehicles powered by its AI software system, Lattice.
Persons: Palmer Luckey, Trump, Jaan Tallinn Organizations: Breaking Defense, Capitol, Pentagon, Anduril Industries, Blue Force Technologies, TechCrunch, Founders Fund, Andreessen Horowitz, Boeing, Lockheed Martin, Department of Defense, Department of Homeland Security, CNBC, Special Operations Command, US Customs and Border Protection, Skype, Cambridge Centre, Life Institute Locations: Wall Street, Silicon Valley
The Air Force has requested $5.8 billion in its budget to create AI-driven XQ-58A Valkyrie aircraft. The autonomous aircraft are suited to so-called suicide missions and can keep human pilots out of harm's way, the Air Force says. Human rights advocates say letting technology take lives crosses a moral boundary. The Times reported each Valkyrie will cost between $3 million and $25 million, far less than a manned fighter jet. Air Force and Department of Defense representatives did not immediately respond to Insider's request for comment.
Persons: Mary Wareham, António Guterres Organizations: Air Force, New York Times, Department of Defense, Human Rights Watch, Life Institute, United Nations Locations: Wall Street, Silicon Valley, Gulf of Mexico
Max Tegmark has long believed in the promise of artificial intelligence. As a physicist and AI researcher at the Massachusetts Institute of Technology and a co-founder of the Future of Life Institute, which studies the risks of powerful technologies, he has envisioned a near future in which superintelligent computers could fight climate change, find cures for cancer and generally solve our thorniest problems. As long as proper safety standards are in place, he argued, “the sky’s the limit.”
Persons: Max Tegmark Organizations: Massachusetts Institute of Technology, Life Institute
Jaan Tallinn helped build Skype and is a co-founder of the Future of Life Institute. He recently warned of the risks of an AI arms race, describing theoretical anonymous "slaughterbots." As AI technology develops, Tallinn is especially afraid of the implications that military use might have for the future of AI. When contacted by Insider, the Future of Life Institute said it agreed with Tallinn's remarks on his fears of weaponized AI. Now AI researchers, tech moguls, celebrities, and regular people alike are worried.
Persons: Jaan Tallinn, Elon Musk, Christopher Nolan, Steve Wozniak, Emad Mostaque, Kali Hays Organizations: Skype, Life Institute, Al Jazeera, Cambridge Centre, Apple Locations: Estonia, Tallinn
We need to do the proper basic maintenance and we need regulation." Regulation is already being prepared in several countries to tackle issues around AI. The European Union's proposed AI Act, for example, would classify AI applications into different risk levels, banning uses considered "unacceptable" and subjecting "high-risk" applications to rigorous assessments. Developing ever-more powerful AI will also risk eliminating jobs to a point where it may be impossible for humans to simply learn new skills and enter other industries. "It seems like the most obvious thing in the world not to put AI into nuclear command and control," he said.
Persons: Michelai Graham, Richard Sonnenblick, Seth Dobrin, Ian Swanson, Anthony Aguirre, Sheila Dang, Deepa Babington Organizations: Protect AI, AI Institute, Life Institute, Thomson Reuters Locations: Plainview, Austin, United States
An algal bloom near southern California beaches is causing sea lions to act unpredictably. It's also causing the sea lions to give birth to stillborn pups, a marine mammal expert said. For the past month, beachgoers have spotted sea lions across Southern California's coastlines — from Ventura to San Diego counties — exhibiting peculiar behavior. The cause is a toxic algal bloom that experts have told Insider is the "worst outbreak" in Southern California yet. Sea lions rest at the Marine Mammal Care Center facility.
Persons: John Warner Organizations: Marine Mammal Care Center, National Oceanic and Atmospheric Administration (NOAA), NOAA Fisheries, Channel Islands Marine & Wildlife Institute, Los Angeles Unified School District, ABC News Locations: California, Southern California, Ventura, San Diego, Los Angeles, Santa Barbara
California's Central Coast beaches are littered with dead dolphins and sea lions this summer. A sick sea lion was roped off by rescue workers to prevent people from approaching. As of June 27, the Channel Islands Marine & Wildlife Institute (CIMWI) had responded to over 500 live sea lions exhibiting signs of domoic acid poisoning and over 150 dead sea lions in Santa Barbara and Ventura counties. A warning message is written in sand to prevent people from approaching a sick sea lion on the beach. Mortality rates among adult sea lions have been "significant," according to CIMWI, and animals are dying despite receiving treatment.
Persons: Sam Dover, Katherine Tangalakis-Lippert Organizations: Channel Islands Marine & Wildlife Institute, NOAA Fisheries Locations: California, Southern California, Santa Barbara, Ventura counties, Carpinteria, California
A sick sea lion is marked with paint and left on a beach, unable to be rescued due to overcrowded facilities, as toxic algae is blamed for sickening sea lions and dolphins along the coast of Southern California, in Redondo Beach, California, U.S., June 23, 2023. Experts say a recent outbreak of algae bloom - commonly known as red tide - has sickened and killed an unknown number of sea lions and dolphins. Marine biologists are paying close attention because they consider sea lions a sentinel species - animals that can help identify environmental risks to humans. The Channel Islands Marine & Wildlife Institute reported 1,000 sightings of sick and dead marine mammals from June 8 through 14. Sea lions are a fixture on many California beaches, sunning on the shoreline, barking at each other, and sometimes looking for an easy meal from tourists.
Persons: Mike Blake, John Warner, Omar Younis, Daniel Trotta Organizations: REUTERS, Marine Mammal Care Center, Channel Islands Marine & Wildlife Institute, Thomson Reuters Locations: Southern California, Redondo Beach, California, U.S., Los Angeles, San Pedro, Hermosa Beach
There's a chance that AI development could get "catastrophic," Yoshua Bengio told The New York Times. "Today's systems are not anywhere close to posing an existential risk," but they could in the future, he said. "Today's systems are not anywhere close to posing an existential risk," Yoshua Bengio, a professor at the Université de Montréal, told the publication. Marc Andreessen spoke even more strongly in a blog post last week in which he warned against "full-blown moral panic about AI" and described "AI risk doomers" as a "cult." "AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote.
Persons: Yoshua Bengio, Anthony Aguirre, Elon Musk, Steve Wozniak, Eric Schmidt, Bill Gates, Marc Andreessen Organizations: New York Times, University of California, Microsoft, Anthropic, Life Institute, Apple, Center for AI Safety Locations: Santa Cruz, Montréal
Palantir's boss Alex Karp opposes the idea of a pause in artificial intelligence research, in contrast to an open letter from the Future of Life Institute signed by some of the biggest names in the tech industry. The letter, which has garnered over 31,000 signatures including names like Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, called for a pause on AI research on models larger than GPT-4, which powers tools such as ChatGPT. The letter also said that if "such a pause cannot be enacted quickly, governments should step in and institute a moratorium." Speaking to BBC Radio in an interview broadcast Thursday, Karp said he is of the view that "many of the people asking for a pause, are asking for a pause because they have no product." To him, "studying this and allowing other people to win both on commercial areas and on the battlefield" is a really bad strategy.
Persons: Alex Karp, Elon Musk, Steve Wozniak Organizations: Palantir, Life Institute, Tesla, Apple, BBC Radio
Sam Altman said he worried creating ChatGPT was "something really bad" given the risks AI poses. OpenAI CEO Sam Altman has admitted to losing sleep over the dangers of his creation, ChatGPT. In a conversation during a recent trip to India, Altman said he worries over the idea that he may have done "something really bad" by creating ChatGPT, which was released in November and sparked a surge of interest in AI. The risks are high: a number of tech leaders and government officials have raised concerns about the pace of development of AI platforms. Earlier this month, Altman was among more than 350 scientists and tech leaders who signed a statement expressing deep concern over AI risks.
Persons: Sam Altman, Satyan Gajwani, Elon Musk, Steve Wozniak Organizations: Times Internet, Economic Times, OpenAI, Life Institute, Apple Locations: New Delhi, Israel, Jordan, Qatar, UAE, India, South Korea
There's a sense of déjà vu, similar to the digital revolution, and it's led to a generational divide. Tech leaders are also worried. While in many instances these digital innovations brought people closer, helped families bond, and gave people a tool to voice their feelings and opinions, for many others they created a divide between generations: a digital divide. The letter read: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." This can result in a digital divide, where some people have limited access to technology or lack the necessary digital skills to effectively engage with AI systems and other emerging technologies.
Persons: Elon Musk, Steve Wozniak, Sam Altman, Bill Gates, Spriha Srivastava Organizations: Facebook, Twitter, Adobe, OpenAI, Life Institute, Apple
May 17 (Reuters) - The swift growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a Reuters/Ipsos poll published on Wednesday. More than two-thirds of Americans are concerned about the negative effects of AI and 61% believe it could threaten civilization. ChatGPT has kicked off an AI arms race, with tech heavyweights like Microsoft (MSFT.O) and Google (GOOGL.O) vying to outdo each other's AI accomplishments. The Reuters/Ipsos poll found that the number of Americans who foresee adverse outcomes from AI is triple the number of those who don't. Those who voted for Donald Trump in 2020 expressed higher levels of concern; 70% of Trump voters compared to 60% of Joe Biden voters agreed that AI could threaten humankind.
Microsoft unveiled new versions of its Bing internet-search engine and Edge browser powered by the newest technology from ChatGPT maker OpenAI. Of the 50 companies on this year's list, 21 told us that AI is critically important to more than 50% of their revenue. Half of the companies in the top 10 of the 2023 CNBC Disruptor 50 list feature key use of AI, and notably, they represent a diverse range of industries and use cases. OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, Feb. 7, 2023. The call to slow down is, in fact, less safe than what they're proposing," he said, referring to OpenAI and Altman.
“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper in an interview on Tuesday. Apple co-founder Steve Wozniak, who was one of the signatories on the letter, appeared on “CNN This Morning” on Tuesday, echoing concerns about its potential to spread misinformation. “Tricking is going to be a lot easier for those who want to trick you,” Wozniak told CNN. Hinton, for his part, told CNN he did not sign the petition. “It’s not clear to me that we can solve this problem,” Hinton told Tapper.
Microsoft's chief scientific officer says he disagrees with people calling for a pause on AI development, including Elon Musk. Eric Horvitz told Fortune it's "reasonable" for people to be concerned but that we need to "jump in, as opposed to pause." Microsoft's chief scientific officer has addressed an open letter signed by Elon Musk and thousands of others calling for a pause on AI development. We need to really just invest more in understanding and guiding and even regulating this technology—jump in, as opposed to pause." Musk cofounded OpenAI in 2015 alongside Altman and others but left the board in 2018.
He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it. In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically. “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. Obviously, I no longer think that.” Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.
Total: 25